Novel deep learning methods for track reconstruction
For the past year, the HEP.TrkX project has been investigating machine
learning solutions to LHC particle track reconstruction problems. A variety of
models were studied that drew inspiration from computer vision applications and
operated on an image-like representation of tracking detector data. While these
approaches have shown some promise, image-based methods face challenges in
scaling up to realistic HL-LHC data due to high dimensionality and sparsity. In
contrast, models that can operate on the spacepoint representation of track
measurements ("hits") can exploit the structure of the data to solve tasks
efficiently. In this paper we will show two sets of new deep learning models
for reconstructing tracks using space-point data arranged as sequences or
connected graphs. In the first set of models, Recurrent Neural Networks (RNNs)
are used to extrapolate, build, and evaluate track candidates akin to Kalman
Filter algorithms. Such models can express their own uncertainty when trained
with an appropriate likelihood loss function. The second set of models uses
Graph Neural Networks (GNNs) for the tasks of hit classification and segment
classification. These models read a graph of connected hits and compute
features on the nodes and edges. They adaptively learn which hit connections
are important and which are spurious. The models are scalable, with simple
architectures and relatively few parameters. Results for all models will be
presented on ACTS generic detector simulated data. Comment: CTD 2018 proceedings.
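As an illustration of the Kalman-filter-like RNN idea described in this abstract, the sketch below shows how an LSTM can predict the next hit position on a track as a Gaussian and be trained with a negative log-likelihood loss, so the model expresses its own uncertainty. The layer sizes, the 3D hit representation, and all names are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (not the authors' code) of an RNN track extrapolator that
# predicts the next hit position as a Gaussian, so the model can express its
# own uncertainty through a negative log-likelihood loss.
import torch
import torch.nn as nn

class GaussianHitPredictor(nn.Module):
    def __init__(self, hit_dim=3, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(hit_dim, hidden_dim, batch_first=True)
        self.mean = nn.Linear(hidden_dim, hit_dim)     # predicted next-hit position
        self.log_var = nn.Linear(hidden_dim, hit_dim)  # predicted uncertainty

    def forward(self, hits):                # hits: (batch, seq_len, hit_dim)
        h, _ = self.lstm(hits)
        return self.mean(h), self.log_var(h)

def gaussian_nll(mean, log_var, target):
    """Negative log-likelihood of the target hits under the predicted Gaussian."""
    return 0.5 * (log_var + (target - mean) ** 2 / log_var.exp()).mean()

# Toy usage: predict hit i+1 from hits 0..i along each candidate track.
model = GaussianHitPredictor()
tracks = torch.randn(8, 10, 3)                   # 8 toy tracks with 10 hits each
mean, log_var = model(tracks[:, :-1])
loss = gaussian_nll(mean, log_var, tracks[:, 1:])
loss.backward()
```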
Entangled Photon Pair Source Demonstrator using the Quantum Instrumentation Control Kit System
We report the first demonstration of using the Quantum Instrumentation
Control Kit (QICK) system on RFSoC-FPGA technology to drive an entangled photon
pair source and to detect the photon signals. With the QICK system, we achieve
a high level of performance, including a coincidence-to-accidental ratio
exceeding 150 and an entanglement visibility exceeding 95%, consistent with the
metrics achieved using conventional waveform generators. We also
demonstrate simultaneous detector readout using the digitization functionality of
QICK, achieving an internal system synchronization time resolution of 3.2 ps. The
work reported in this paper represents an explicit demonstration of the
feasibility of replacing commercial waveform generators and time taggers with
RFSoC-FPGA technology in the operation of a quantum network, representing a
cost reduction of more than an order of magnitude.
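For context on the coincidence-to-accidental ratio (CAR) quoted above, the following sketch shows one common way such a figure can be estimated offline from two lists of photon arrival time tags: coincidences are counted in a narrow window at zero delay and compared to counts in the same window at a deliberately large delay, which captures only accidentals. The window width, delay offset, and function names are hypothetical and are not taken from the QICK system.

```python
# Illustrative sketch (not QICK firmware code) of estimating a
# coincidence-to-accidental ratio from two lists of time tags.
import numpy as np

def count_pairs(t_a, t_b, window, offset=0.0):
    """Count events in t_a with a partner in t_b within +/- window of (t_a + offset)."""
    t_b = np.sort(t_b)
    lo = np.searchsorted(t_b, t_a + offset - window)
    hi = np.searchsorted(t_b, t_a + offset + window)
    return int(np.sum(hi - lo))

def coincidence_to_accidental_ratio(t_a, t_b, window=1e-9, accidental_offset=100e-9):
    coincidences = count_pairs(t_a, t_b, window)                    # true + accidental
    accidentals = count_pairs(t_a, t_b, window, accidental_offset)  # delayed: accidentals only
    return coincidences / max(accidentals, 1)

# Toy usage with uncorrelated random tags (gives CAR near 1; a real
# entangled-photon source yields a much larger value).
rng = np.random.default_rng(0)
t_a = np.sort(rng.uniform(0.0, 1.0, 10_000))   # toy arrival times in seconds
t_b = np.sort(rng.uniform(0.0, 1.0, 10_000))
print(coincidence_to_accidental_ratio(t_a, t_b))
```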
HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation
Historically, high energy physics computing has been performed on large
purpose-built computing systems. These began as single-site compute facilities,
but have evolved into the distributed computing grids used today. Recently,
there has been an exponential increase in the capacity and capability of
commercial clouds. Cloud resources are highly virtualized and intended to be
able to be flexibly deployed for a variety of computing tasks. There is a
growing interest among cloud providers in demonstrating the capability to
perform large-scale scientific computing. In this paper, we discuss results
from the CMS experiment using the Fermilab HEPCloud facility, which utilized
both local Fermilab resources and virtual machines in the Amazon Web Services
Elastic Compute Cloud. We discuss the planning, technical challenges, and
lessons learned involved in performing physics workflows on a large-scale set
of virtualized resources. In addition, we will discuss the economics and
operational efficiencies when executing workflows both in the cloud and on
dedicated resources. Comment: 15 pages, 9 figures.
Graph Neural Networks for Particle Reconstruction in High Energy Physics detectors
Pattern recognition problems in high energy physics are notably different
from traditional machine learning applications in computer vision.
Reconstruction algorithms identify and measure the kinematic properties of
particles produced in high energy collisions and recorded with complex detector
systems. Two critical applications are the reconstruction of charged particle
trajectories in tracking detectors and the reconstruction of particle showers
in calorimeters. These two problems have unique challenges and characteristics,
but both have high dimensionality, high degree of sparsity, and complex
geometric layouts. Graph Neural Networks (GNNs) are a relatively new class of
deep learning architectures which can deal with such data effectively, allowing
scientists to incorporate domain knowledge in a graph structure and learn
powerful representations leveraging that structure to identify patterns of
interest. In this work we demonstrate the applicability of GNNs to these two
diverse particle reconstruction problems.
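The sketch below illustrates, under stated assumptions, the kind of graph neural network this abstract refers to: hits are graph nodes, candidate hit pairs are edges, node features are iteratively updated from their neighbours, and each edge receives a score for being a true track segment. The architecture, layer sizes, and names are illustrative only, not the model from the paper.

```python
# Minimal sketch (illustrative assumptions, not the paper's model) of a GNN
# "segment classifier" that scores candidate edges between detector hits.
import torch
import torch.nn as nn

class EdgeClassifierGNN(nn.Module):
    def __init__(self, node_dim=3, hidden_dim=64, n_iters=4):
        super().__init__()
        self.encode = nn.Linear(node_dim, hidden_dim)
        self.node_net = nn.Sequential(nn.Linear(2 * hidden_dim, hidden_dim), nn.Tanh())
        self.edge_net = nn.Sequential(nn.Linear(2 * hidden_dim, 1), nn.Sigmoid())
        self.n_iters = n_iters

    def forward(self, x, edge_index):
        # x: (num_hits, node_dim); edge_index: (2, num_edges) of hit indices
        h = torch.tanh(self.encode(x))
        src, dst = edge_index
        for _ in range(self.n_iters):
            # score each edge from the current node representations
            w = self.edge_net(torch.cat([h[src], h[dst]], dim=1))      # (num_edges, 1)
            # aggregate neighbour messages, weighted by the edge scores
            msg = torch.zeros_like(h).index_add_(0, dst, w * h[src])
            h = self.node_net(torch.cat([h, msg], dim=1))
        return self.edge_net(torch.cat([h[src], h[dst]], dim=1)).squeeze(-1)

# Toy usage: 100 hits with 300 candidate edges.
x = torch.rand(100, 3)
edge_index = torch.randint(0, 100, (2, 300))
scores = EdgeClassifierGNN()(x, edge_index)      # (300,) segment probabilities
```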
The HEP.TrkX Project: deep neural networks for HL-LHC online and offline tracking
Particle track reconstruction in dense environments such as the detectors of the High Luminosity Large Hadron Collider (HL-LHC) is a challenging pattern recognition problem. Traditional tracking algorithms such as the combinatorial Kalman Filter have been used with great success in LHC experiments for years. However, these state-of-the-art techniques are inherently sequential and scale poorly with the expected increases in detector occupancy under HL-LHC conditions. The HEP.TrkX project is a pilot project with the aim of identifying and developing cross-experiment solutions based on machine learning algorithms for track reconstruction. Machine learning algorithms bring significant potential to this problem thanks to their capability to model complex non-linear data dependencies, to learn effective representations of high-dimensional data through training, and to parallelize easily on high-throughput architectures such as GPUs. This contribution describes our initial explorations of this relatively untested idea space. We discuss the use of recurrent (LSTM) and convolutional neural networks to find and fit tracks in toy detector data.
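As a minimal illustration of the image-based approach mentioned at the end of this abstract, the toy sketch below maps a 2D detector-occupancy image to per-pixel hit scores with a small convolutional network, treating track finding as image segmentation. The network size and the 32x32 toy image are illustrative assumptions only.

```python
# Toy sketch (assumptions only) of a CNN that scores each pixel of a detector
# image for belonging to the target track.
import torch
import torch.nn as nn

class ToyTrackFinderCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),             # per-pixel logit
        )

    def forward(self, image):           # image: (batch, 1, height, width)
        return self.net(image)

# Toy usage on random 32x32 "detector images".
model = ToyTrackFinderCNN()
scores = torch.sigmoid(model(torch.rand(4, 1, 32, 32)))   # (4, 1, 32, 32) hit probabilities
```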
- …